Session List

Opening address & Welcome to Country


Keynote - What makes researchers thrive?


This presentation will reflect on the habits and behaviours that seem to distinguish researchers who flourish, both in the early stages of their career and over the long term. Drawing on experience supervising research students and leading HDR programs, this talk explores what actually predicts growth and resilience in research careers, and why it is often not what people expect.
Vim text editing for research


Vim is a powerful, keyboard-driven text editor that enables fast and efficient editing, particularly in cloud-based environments where graphical tools are limited. Vim integrates well with tools such as LaTeX, Git, and command-line utilities, making it useful for writing, coding, and data manipulation. This 30-minute beginner-friendly session introduces Vim fundamentals in the context of research workflows, focusing on modal editing, navigation, and core commands. Participants will learn practical techniques to navigate and edit files, and begin building a flexible workflow for coding and text-based research tasks.
A Basic Guide to Data Visualisation Using ggplot2


Working with frequency mismatch between dependent and independent variables


In finance research, one of the most significant issues is inconsistency in the frequency of dependent and independent variables. The problem is especially acute when incorporating macroeconomic factors into research using financial indices, whose data are mostly daily. Some scholars employ interpolation to address the issue; however, interpolation cannot restore the nature of macroeconomic data and biases the estimation. This hands-on workshop addresses the data frequency mismatch using a quantitative method called GARCH-MIDAS. It will demonstrate how to run a GARCH-MIDAS regression in which the dependent variable is daily while the independent variable is monthly. The estimation is performed in the academic version of EViews 14.
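For orientation before the workshop, the standard GARCH-MIDAS specification (in the spirit of Engle, Ghysels and Sohn's formulation; the workshop's exact parameterisation may differ) decomposes daily volatility into a daily short-run component and a monthly long-run component driven by the macro variable:

```latex
% Daily return on day i of month t: long-run component \tau_t (monthly),
% short-run component g_{i,t} (daily, unit mean)
r_{i,t} = \mu + \sqrt{\tau_t\, g_{i,t}}\;\varepsilon_{i,t},
    \qquad \varepsilon_{i,t} \sim N(0,1)

% Short-run GARCH(1,1)-style dynamics:
g_{i,t} = (1-\alpha-\beta)
        + \alpha\,\frac{(r_{i-1,t}-\mu)^2}{\tau_t}
        + \beta\, g_{i-1,t}

% Long-run component driven by K lags of the monthly macro variable X_t,
% smoothed with Beta weights \varphi_k(\omega_1,\omega_2):
\log \tau_t = m + \theta \sum_{k=1}^{K} \varphi_k(\omega_1,\omega_2)\, X_{t-k}
```

The key point for the frequency mismatch: the daily equation only ever sees the monthly series through the smoothed long-run component, so no interpolation of the macro data is required.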
Who tips more? Find out with a hands-on Python visualization workshop


A pattern or trend is often hidden in data, but can you always spot it? Data alone rarely tells the whole story. In this hands-on session we will use a real-world dataset on restaurant tipping behaviour to find out whether bigger tables tip more generously, whether the day of the week influences tipping, and more, by creating plots that visualise relationships in the data. We will run cells in Google Colab to get instant charts and see how elements of the data relate to and influence one another.
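As a taste of the kind of question the session tackles, here is a minimal pandas sketch. A tiny synthetic sample stands in for the workshop's real dataset; the column names mirror the well-known "tips" teaching dataset.

```python
import pandas as pd

# Tiny synthetic sample mimicking the columns of the classic "tips"
# dataset (the workshop uses the real, much larger version).
tips = pd.DataFrame({
    "total_bill": [16.99, 10.34, 21.01, 23.68, 24.59, 25.29],
    "tip":        [1.01,  1.66,  3.50,  3.31,  3.61,  4.71],
    "size":       [2,     3,     3,     2,     4,     4],
    "day":        ["Sun", "Sun", "Sat", "Sat", "Thur", "Thur"],
})

# Tip as a percentage of the bill makes tables of different sizes comparable.
tips["tip_pct"] = 100 * tips["tip"] / tips["total_bill"]

# Do bigger tables tip more generously?
by_size = tips.groupby("size")["tip_pct"].mean().round(2)

# Does the day of the week matter?
by_day = tips.groupby("day")["tip_pct"].mean().round(2)

print(by_size)
print(by_day)
```

In Colab, appending something like `by_size.plot(kind="bar")` in the next cell turns the same summary into an instant chart.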
Hello reproducibility! Build your first Nextflow workflow


Reproducibility shouldn’t be this mysterious thing we all aim for but rarely achieve. In this hands-on session, we’ll go through the basics of Nextflow using the “Hello Nextflow” tutorial as a starting point. You’ll build a simple pipeline step by step, see how processes fit together, and get a feel for how data moves through a workflow. The goal is to show how Nextflow helps you organise your analyses so they can be run again, shared, and scaled without everything breaking. If you’ve ever struggled with “it worked on my machine,” this session is for you.
Data Linkage: Unlocking Research Insights with the WA Data Linkage System (WADLS)


Data linkage transforms siloed datasets into a powerful, interconnected evidence base. This gives you the power to explore relationships between isolated datasets, opening the door to questions you could never answer otherwise. The Western Australian Data Linkage System (WADLS), based at the Department of Health, is among the most comprehensive and high‑quality linkage systems worldwide. It routinely integrates data from a broad range of Health and non‑Health sources and provides linked datasets to support research. This presentation will serve as an introduction to data linkage for researchers - how it works, why it matters, and how to access linked data.
Metadata matters: practical FAIR skills every researcher needs


Your research is only as valuable as the ability, yours and others', to find, understand, and reuse it. Yet most researchers are never formally trained in creating high-quality metadata. This highly practical, hands-on workshop introduces the fundamentals of research data management and the FAIR Principles (Findable, Accessible, Interoperable, Reusable), with real-world examples. The session focuses on building skills you can immediately apply to your own data outputs and use to help others improve theirs. Participants will work directly with example datasets and are encouraged to bring their own to:
• understand what makes metadata useful and why it is often done poorly
• learn how to write clear and reusable metadata
• explore emerging approaches, including AI-assisted prompts, to support metadata creation
By the end of the session, you will leave with practical tools, templates, and the confidence to improve your data practices, enhancing the visibility, impact, and longevity of your research.
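To make "useful metadata" concrete, here is a minimal sketch of a machine-readable record for a hypothetical dataset. The field names loosely follow common dataset-description conventions (DataCite / schema.org style) but are illustrative only, and every value is invented; adapt both to your repository's actual schema.

```python
import json

# Illustrative metadata record for a hypothetical tabular dataset.
# Field names and values are examples, not a formal standard.
metadata = {
    "title": "Weekly water-quality survey, Perth, 2024 (example)",
    "creators": [{"name": "Jane Researcher", "affiliation": "Example University"}],
    "description": "Weekly measurements of pH and turbidity at 12 sites.",
    "keywords": ["water quality", "pH", "turbidity", "Western Australia"],
    "license": "CC-BY-4.0",
    "date_collected": "2024-01-08/2024-12-16",
    # Documenting each variable (name, meaning, units) is what makes
    # the data Reusable by someone who wasn't in the room.
    "variables": {
        "site_id": "Unique site identifier (S01-S12)",
        "ph": "pH, dimensionless, +/- 0.1",
        "turbidity_ntu": "Turbidity in NTU",
    },
}

# Serialising the record and storing it alongside the data files keeps
# the dataset Findable and self-describing.
record = json.dumps(metadata, indent=2)
print(record)
```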
Applied data science for mining geochemistry with Python


This hands-on workshop explores how Python and data science workflows can support geochemical analysis in mining and environmental projects. Participants will learn how to transform laboratory spreadsheets into analysis-ready datasets, automate geochemical calculations and domain-specific plots used in the industry, perform exploratory data analysis, and generate technical visualisations and reports. The session emphasises reproducible workflows and transparent data processing practices increasingly expected in both research and industry. Using realistic datasets, attendees will gain practical experience with simple, interpretable techniques that form the foundation of more advanced approaches such as machine learning and deep learning. The workshop is designed for beginners and provides a foundation for geoscientists interested in applying Python to environmental and mining geochemistry.
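A flavour of the "spreadsheet to analysis-ready" step, as a minimal pandas sketch: the sample IDs, assay values, and the half-detection-limit substitution are all invented for illustration (substituting a fraction of the detection limit is one common, simple convention; the workshop may use other approaches).

```python
import pandas as pd

# Illustrative lab-style assay export: detection-limit flags ("<0.5")
# and mixed units (ppm vs %) are typical of raw laboratory spreadsheets.
raw = pd.DataFrame({
    "sample_id": ["GX-001", "GX-002", "GX-003"],
    "Cu_ppm": ["120", "<0.5", "842"],
    "Fe_pct": ["3.2", "5.1", "1.8"],
})

def censor_to_numeric(value, fraction=0.5):
    """Replace '<DL' entries with fraction * detection limit
    (a simple, common convention); parse everything else as float."""
    s = str(value)
    if s.startswith("<"):
        return fraction * float(s[1:])
    return float(s)

clean = raw.copy()
clean["Cu_ppm"] = clean["Cu_ppm"].map(censor_to_numeric)
clean["Fe_pct"] = clean["Fe_pct"].astype(float)

# Harmonise units so elements are directly comparable: 1 % = 10,000 ppm.
clean["Fe_ppm"] = clean["Fe_pct"] * 10_000

print(clean[["sample_id", "Cu_ppm", "Fe_ppm"]])
```

Keeping steps like these in a script, rather than editing the spreadsheet by hand, is exactly the reproducible, transparent processing the session emphasises.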
When Data Connects: Improving Research with Linked Data


This presentation explores how linked administrative and routinely collected data can be used more effectively across research. It explains how linking datasets allows researchers to connect information from multiple sources, enabling richer, longitudinal, and more complete analyses. The talk highlights common barriers such as difficulty discovering what data exists, complex access processes, and limited early-stage visibility of available datasets. It introduces resources for researchers, including tools to improve data discoverability, access, and usability, guided support, and training to help researchers integrate linked data more easily into study design, analysis, and long-term research workflows.
Meet People Where They Are: Designing Inclusive Research


Inclusive research takes more than good intentions. In this session, we share how Curtin's Engagement & Student Experience team made research participation more inclusive. Drawing on user research from 2025, we’ll explore flexible online and in-person methods, placement of feedback points where students already are, and targeted recruitment and incentive options to reach diverse participants. Walk away with practical ideas and food for thought for researchers seeking to broaden participation through a user experience lens and beyond traditional research frameworks.
From Research to Relevance: Storytelling and communicating your work with impact


Participants will transform their own research into a clear, audience-specific narrative by structuring it as a story, adapting it for different audiences (e.g. public, industry, policy), and refining their language to remove jargon while maintaining credibility and expertise.
How do I move my truckload of data?


Have you ever wondered why your fast internet at home lets you stream movies in UltraHD smoothly, yet your data uploads can be painfully slow? Sharing data with collaborators can be tedious, slow and frustrating. This workshop addresses some data handling solutions in a research context. It will introduce research data movement tools, with a hands-on introduction to FileSender and a peek at Globus, a service that enables large-scale data transfers. By the end of this workshop, you should be able to: (1) understand various network and connectivity constraints, (2) transfer large amounts of data via the network, and (3) search for more advanced options for data movement and know where to go for help. Come along and learn how to make data transfer easy and convenient.
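A back-of-the-envelope calculation shows why upload speed matters so much. The sketch below is a rough estimate only: the 80% efficiency factor is an invented, hand-wavy allowance for protocol overhead and shared links, and the example link speeds are illustrative.

```python
def transfer_time_hours(size_gb, rate_mbps, efficiency=0.8):
    """Rough transfer-time estimate. 'efficiency' is an illustrative
    allowance for protocol overhead and contention on shared links."""
    size_bits = size_gb * 8 * 1e9          # gigabytes -> bits
    effective_rate = rate_mbps * 1e6 * efficiency  # Mbit/s -> bit/s
    return size_bits / effective_rate / 3600

# 1 TB over a typical ~40 Mbps home upload vs a 10 Gbps research link:
print(round(transfer_time_hours(1000, 40), 1))      # roughly 69 hours
print(round(transfer_time_hours(1000, 10_000), 2))  # roughly 17 minutes
```

The same truckload of data that would tie up a home connection for days moves in minutes on research infrastructure, which is where tools like FileSender and Globus come in.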
Beyond Point-and-Click: Reproducible Statistical Workflows in Stata


This session introduces Stata as a powerful tool for statistical analysis, with a focus on how it supports reproducible research. Moving beyond point-and-click workflows, it will explore how do-files enable you to automate analyses, and how log files can be used to create a record of your work. Through live demonstrations, you will learn how to structure analyses, manage data and generate results that can be easily shared and reproduced. Designed for beginners or anyone curious about using Stata, this interactive session will highlight how it can improve efficiency, collaboration and research integrity.
The invisible glue: Trust and Identity for research and innovation


This session will discuss the purpose of the Australian Access Federation, the value we add to the research and innovation ecosystem, and what tools and services currently exist to support the research community. We will also discuss our roadmap of infrastructure improvements and how the community can be involved in co-designing future trust and identity infrastructure.
Panel Discussion: Research Assessment


Research assessment frameworks are shifting, raising questions about how research quality and impact are defined, measured, and defended. This panel explores how universities currently assess research success, and how emerging approaches (including policy and practice impact, industry and community engagement, open research, and use beyond academia) may reshape evaluation in the future. Panellists will discuss what evidence matters for career progression, impact, and funding, and how to balance changing frameworks with strong fundamental research practices. Chaired by Kaitlyn Houston (Murdoch University)
Keynote


Panel Discussion: AI in Research - what is coming


As AI tools become embedded in research practice, questions are emerging about impact, ethics, authorship, and the changing role of researchers themselves. This panel explores what’s coming next for AI in research deliverables, communication, and data science. Panellists will discuss how AI is being used to support writing, visualisation, interpretation, and public engagement, alongside human‑in‑the‑loop (HITL) approaches. The session invites perspectives on AI ethics, research workflows, and how researchers can adopt AI critically and responsibly for research impact and innovation. Chaired by Dr. Ranjodh B. Singh (Curtin University)
AI in precision agriculture


Transforming Archives into Data with Gale’s Digital Scholar Lab


The Gale Digital Scholar Lab provides a transformative platform for teaching and learning by integrating digital humanities methodologies with primary source archives. By enabling students and researchers to curate, clean, and analyze large datasets of historical texts, the Lab fosters critical thinking, data literacy, and interdisciplinary research skills. Its user-friendly tools for text mining, sentiment analysis, and visualization empower learners to uncover new insights while engaging directly with authentic primary materials.
Value of human-in-the-loop AI for research


This session will discuss the importance of a human-in-the-loop (HITL) approach to AI for research. HITL provides greater autonomy, responsibility, reliability and transparency over how the AI works and what it produces, in line with AI ethics principles in Australia and internationally. There will also be a live demonstration of an AI platform that uses HITL machine learning on textual data: the user can directly change the AI's answers, and improvements immediately and permanently become part of the AI's learning. Further advantages of this human-centred AI platform include greater flexibility, privacy, security, cost-effectiveness and environmental sustainability.
Hands-on Introduction to CPU Parallel Programming with OpenMP


This hands-on workshop introduces OpenMP, a widely used shared-memory parallel programming model. Participants will learn how to enable CPU parallelism by creating and coordinating threads, managing data sharing, parallelising loops, balancing workloads, and ensuring correct synchronisation. The session also covers key OpenMP constructs and execution models. It is designed for participants with prior experience in C, C++, or Fortran who want to improve performance through parallel programming.
Panel Discussion


Sensor/wearable AI data integration


Social Media Analytics: From TikTok to Human Insights


Social media has evolved from a communication tool into a powerful part of everyday life, shaping how people interact, make decisions, and express themselves. At the same time, the vast amount of multimodal and largely unstructured data generated on these platforms presents significant challenges for analysis. This session unpacks the social media analytics workflow, from data collection and analysis to the generation of meaningful insights. Drawing on examples from digital multimodal data, the session explores how social media can serve as a rich source for understanding human behaviour and social phenomena. By bridging data-driven methods with social science perspectives, the session demonstrates how valuable human insights can be extracted from complex online environments.
Synthetic Data Generation Program


Launched in late 2022, the Synthetic Data Generation (SDG) Program at the WA Department of Health (the Department) aimed to improve data accessibility, expand data capacity, and reduce the time needed for data provisioning. After three years of development, the Department has established a Synthetic Health Database comprising diverse categories of synthetic data products. It has also endorsed the Synthetic Data Governance and Technical Guidelines, which guide the practice of synthetic data management, governance, and application. We would like to share with researchers insights into synthetic data generation using machine learning models, and pathways for applying synthetic data.
Exploring the Microbiome through the Virome for Biological Insight


Understanding Data as a Non-Digital Scholar: Confusion to Clarity


Data literacy is not about coding or advanced analytics. It is the ability to understand, interpret, question, and communicate information in meaningful ways. In today's data-driven world, this ability is no longer the preserve of technical experts; it matters to everyone. For non-digital scholars, data literacy means using data as a tool to strengthen arguments, support decision-making, and enhance understanding across different contexts. This talk is designed to simplify data literacy by breaking down key concepts into accessible and practical insights. It aims to build confidence by showing that meaningful engagement with data does not require advanced technical skills, but rather a clear understanding of how to interpret and apply information in research and decision-making.
Empowering Research Impact with AI: Practical Tools for Modern Communication


As research faces growing demands for visibility and real-world influence, AI tools are transforming how findings are communicated. This 90-minute hands-on workshop introduces accessible AI applications for crafting compelling abstracts, visuals, summaries, public narratives, and audience-tailored outputs. Participants will practice tools for drafting, editing, data visualisation, and ethical use (addressing accuracy, bias, and attribution). With a focus on Gen Z researchers, digital natives leading AI adoption in daily workflows, this session equips attendees across disciplines to boost impact, funding, and outreach. No prior AI experience needed; bring your laptop for interactive exercises.
FAIR, Attributed, Harmonized and AI-Ready by Design: Turning Instrument Output into a Strategic Institutional Asset


LabArchives introduces Luma Lab Connect, an enterprise Research Data Management (RDM) gateway for shared resources. Academic and research institutions are not short on data. They are short on attribution, continuity, context, standardization and institutional visibility across the data their shared research resources and lab-owned instruments already generate. Join LabArchives for a webinar on how the Luma Lab Connect service is designed to help institutions aggregate, harmonize, standardize, and govern instrument data to support reproducibility, grant funding and accountability, FAIR readiness, AI readiness, and stronger operational intelligence, without rip-and-replace infrastructure changes.
Can AI Tell If You’re Empathic? Detecting Empathy from Text and Video Interactions


Empathy detection is an emerging topic at the intersection of natural language processing, computer vision and psychology. In this presentation, we explore how foundation models can address key barriers in this field. We will see that large language models can guide smaller language models for text-based empathy detection and achieve state-of-the-art results on benchmark tasks. Next, we will explore uncertainty quantification in language modelling to enable robust and trustworthy empathy computing systems. For video-based empathy detection, we will discuss how tabular foundation models achieve strong accuracy and cross-subject generalisation while respecting privacy constraints. We will conclude with potential applications of such empathy detection systems across various real-life scenarios.
Enhancing Research Efficiency with SCOPUS AI


This 30-minute hands-on workshop introduces researchers to the practical use of SCOPUS AI for improving literature searches and research analysis. Participants will learn how to use AI-assisted tools to identify relevant publications, map key themes, and generate concise research summaries. The session includes guided exercises demonstrating how SCOPUS AI can help refine search strategies, uncover emerging trends, and support evidence-based writing. Suitable for academics, postgraduates, and research staff looking to integrate AI capabilities into their research workflow for greater efficiency and insight.
Keynote


Accelerate Your Research with Cloud Computing on the ARDC Nectar Research Cloud


Don't let limited computing power slow down your research. The ARDC Nectar Research Cloud provides fast and scalable computing resources tailored specifically for research. Whether you need to run intensive data analyses and complex simulations, train AI and ML models, manage big data or collaborate seamlessly across institutions, Nectar gives you the computational power to scale up your work. Join us for an interactive introduction to Nectar, where we'll cover: * What is cloud computing and how can it accelerate my research? * Real-world case studies powered by Nectar. * Guidance on accessing tutorials and ongoing support. * Live Q&A to answer your questions. No cloud computing or coding experience is required to attend.
Accessing and Analysing Research Trends with Python/R and Academic APIs


This session introduces participants to using academic APIs, including the Web of Science Starter and Expanded APIs, to retrieve and analyse research metadata using ready-to-use Python and R templates. Participants will learn:
• how academic APIs return structured metadata
• how to make basic queries using existing Python/R templates
• how to extract common metadata fields (titles, keywords, authors, citation counts)
• how to interpret results for trend detection and generate simple visualisations of topic evolution using prebuilt examples.
All examples are beginner-friendly, and all code will be provided for post-workshop use. This session is cross-disciplinary and relevant across the sciences, social sciences, and humanities.
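The field-extraction step can be sketched with nothing but the standard library. The mock response below is invented to resemble a generic bibliographic API; the real Web of Science APIs return a different, richer schema, so treat every field name here as illustrative only.

```python
import json

# Mock API response in the general spirit of bibliographic APIs.
# All field names and records are invented for illustration.
response_text = json.dumps({
    "records": [
        {"title": "Deep learning for soil mapping",
         "authors": ["Lee, A.", "Singh, R."],
         "keywords": ["deep learning", "soil"],
         "times_cited": 42},
        {"title": "Remote sensing of wetlands",
         "authors": ["Chen, B."],
         "keywords": ["remote sensing", "wetlands"],
         "times_cited": 17},
    ]
})

records = json.loads(response_text)["records"]

# Flatten the metadata fields the session focuses on into simple rows.
rows = [
    {
        "title": r["title"],
        "first_author": r["authors"][0],
        "n_keywords": len(r["keywords"]),
        "times_cited": r["times_cited"],
    }
    for r in records
]

# A simple trend-style summary: total citations accumulated per keyword.
citations_per_keyword = {}
for r in records:
    for kw in r["keywords"]:
        citations_per_keyword[kw] = citations_per_keyword.get(kw, 0) + r["times_cited"]

print(rows)
print(citations_per_keyword)
```

In the workshop, the same flattened rows would feed directly into a plotting library to chart topic evolution over time.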
Work Smarter, Not Harder: The Researcher’s Guide to national digital research tools, services and training


Discover how Australia’s National Research Infrastructure (NRI) can support you in your research journey. NRI is a network of tools, data platforms and expert support designed to make research easier and more effective. Through a series of lightning talks, you will be introduced to key NRI tools and services, including high performance computing, national-scale data platforms, bioinformatics, data linkage, high-quality data assets and cloud computing. You will learn how these can help you optimise your workflow, overcome common research challenges and save time on routine tasks. You will also find out where to access training and support, so you can confidently use these national resources to accelerate your research.
Co-speakers:
- Meirian Lovelace-Tozer, Skills Development Lead (Services), Australian Research Data Commons (ARDC)
- Sarah Thomas, Portfolio Manager, Australian Access Federation (AAF)
- Dr Sara King, Training and Engagement Lead, AARNet
- Mzingisi Mqhum, Stakeholder Engagement Manager, Australian Urban Research Infrastructure Network (AURIN)
- Dr Ludovic Capelli, Training and Education Manager, Pawsey Supercomputing Research Centre
- Dr Kate Miller, Coordinator, Senior Specialist, Science and Partnerships, Population Health Research Network (PHRN)
Contributed to content:
- Dr Abduallah Shaikh, Digital Skills Development Manager, National Computational Infrastructure (NCI)
- Dr Paige Martin, Team Lead, User Training, ACCESS-NRI
- Dr Melissa Burke, Training Manager, Australian BioCommons
Community choreography: Practical facilitation skills for collaborative researchers


A thriving research culture is only as strong as the community that supports it. Yet the essential soft skills required to facilitate teams, build Communities of Practice, and support your peers are rarely formally taught. Most of us end up learning these critical leadership skills by accident or through absolute disasters. This highly practical workshop introduces fundamentals of community facilitation for postgraduate students. We will explore a vital blend of pedagogy and performativity to get people engaged and keep them working together effectively. Drawing on evidence-based research and ARDC skills community mentoring strategies, this session builds skills you can immediately apply to uplift your own research networks. Participants will gain practical tips, peer-training guides and evocative metaphors such as the choreography of icebergs.
DALiuGE: A Scientific Workflow Eco-System for Small and (Very) Large Scale Processing


Panel Discussion: Research to Impact - Communicating for Change


Research and research data generate valuable insights, but how do those insights move beyond disciplinary boundaries and into practice? This panel explores how research skills and expertise are translated across different contexts, and how we can help learning become meaningful real-world action. Panellists will discuss communities of practice, facilitation, and skills-focused learning spaces, including how complex information (data, evidence, and analysis) is communicated to practitioners, industry, policy makers, and communities. The discussion will consider how interpretation and sense-making shape confidence and capability, how impact can be recognised and evaluated, and how institutions can create spaces for cross-disciplinary skill-building that leads to change.
Sponsor (Forrest Research Foundation)


Digital Tools for HASS and GLAM research


Are you a passionate researcher active in humanities, arts, social sciences (HASS) or Indigenous research? Would you like to learn about tools, platforms and services available right now that will accelerate your research? Join us for:
• An overview of the ARDC’s HASS and Indigenous Research Data Commons
• An introduction to digital tools supporting HASS and Indigenous research in Australia, with a focus on tools for text analytics, including offerings from the Language Data Commons of Australia (LDaCA)
• An interactive discussion about Digital GLAM and HASS Tools & Services and what you are using.
ResBaz Close



Telling Stories, Saving Species: Elevating Publication Impact Through Narrative Storytelling


Carpentries: The Unix Shell


Carpentries: Version Control with Git


Carpentries: R for Reproducible Scientific Analysis


An Enquiry into the Triple Bottom Line of Sustainability


Since its formal endorsement by the United Nations General Assembly in 1987, following the Brundtland Report, Sustainable Development (SD) has become a dominant global policy framework. Its evolution has included key methodological shifts, notably the adoption of the Triple Bottom Line (TBL) in 1997 to integrate economic, social, and environmental dimensions. However, TBL has been widely critiqued for its limited capacity to address cultural contexts. This workshop critically examines these limitations and proposes the Quadruple Bottom Line (QBL) as an expanded framework, emphasising culture as an essential pillar for more inclusive and context-sensitive sustainability practice.
Introduction to coding agents (Claude Code)


This talk introduces coding agents, focusing on Claude Code. Coding agents are AI tools that use Large Language Models to interpret natural language and perform tasks. Agents can help researchers write code, analyse data, plan research, and prototype ideas. The session will show how Claude Code works and how it can be used in research workflows. It will demonstrate effective use of Claude Code and highlight its limitations.